# Multi-round Fine-tuning
## T5 Grammar Corruption
- Author: juancavallotti
- License: Apache-2.0
- Task: Machine Translation
- Tags: Transformers
- Description: A grammar correction model fine-tuned from t5-base for detecting and correcting grammatical errors in text.
## Nick Asr V2
- Author: ntoldalagi
- Task: Speech Recognition
- Tags: Transformers
- Description: nick_asr_v2 is an automatic speech recognition (ASR) model fine-tuned on an unknown dataset. On the evaluation set it achieves a loss of 1.4562, a word error rate of 0.6422, and a character error rate of 0.2409.
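The word error rate and character error rate quoted for the speech models in this list are both normalized edit distances between a reference transcript and the model's hypothesis. A minimal sketch of how they are computed, in plain Python with no ASR tooling assumed (the helper names here are illustrative, not from any model card):

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences of tokens."""
    m, n = len(ref), len(hyp)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i  # deleting all remaining reference tokens
    for j in range(n + 1):
        d[0][j] = j  # inserting all remaining hypothesis tokens
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[m][n]

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    return edit_distance(ref, hyp) / len(ref)

def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: char-level edit distance / reference length."""
    return edit_distance(list(reference), list(hypothesis)) / len(reference)
```

For example, `wer("the cat sat on the mat", "the cat sat on mat")` is 1/6: one deletion against a six-word reference. A rate above 1.0 is possible when the hypothesis contains many insertions.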
## Ascend
- Author: GleamEyeBeast
- Task: Speech Recognition
- Tags: Transformers
- Description: ascend is a model fine-tuned from GleamEyeBeast/ascend, used primarily for speech recognition tasks; it achieves a word error rate of 0.6412 and a character error rate of 0.2428 on the evaluation set.
## Wav2vec2 Base Checkpoint 12
- Author: jiobiala24
- License: Apache-2.0
- Task: Speech Recognition
- Tags: Transformers
- Description: A fine-tuned version of wav2vec2-base-checkpoint-11.1, trained on the Common Voice dataset and used primarily for speech recognition tasks.
## Deberta V3 Base Goemotions
- Author: mrm8488
- License: MIT
- Task: Text Classification
- Tags: Transformers
- Description: A text sentiment classification model fine-tuned from microsoft/deberta-v3-base on an unknown dataset, with an evaluated F1 score of 0.4468.
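The F1 score reported for this classifier is the harmonic mean of precision and recall. A minimal single-label sketch (the `f1_score` helper is illustrative, not from the model card, which evaluates the multi-label GoEmotions setting):

```python
def f1_score(y_true, y_pred, positive=1):
    """Binary F1: harmonic mean of precision and recall for one class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

With `y_true = [1, 1, 0, 0]` and `y_pred = [1, 0, 1, 0]`, precision and recall are both 0.5, so F1 is 0.5. Because F1 penalizes both false positives and false negatives, a score like 0.4468 on a fine-grained emotion set can still be competitive.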
## Wav2vec2 Base Checkpoint 5
- Author: jiobiala24
- License: Apache-2.0
- Task: Speech Recognition
- Tags: Transformers
- Description: A speech recognition model fine-tuned from wav2vec2-base-checkpoint-4 on the common_voice dataset, supporting automatic speech recognition (ASR) tasks.
## Mrc Pretrained Roberta Large 1
- Author: this-is-real
- Task: Large Language Model
- Tags: Transformers
- Description: KLUE-RoBERTa-large is a Korean pre-trained language model based on the RoBERTa architecture, developed by a Korean research team and optimized for Korean natural language processing tasks.
## Wav2vec2 Base Checkpoint 10
- Author: jiobiala24
- License: Apache-2.0
- Task: Speech Recognition
- Tags: Transformers
- Description: A speech recognition model fine-tuned from wav2vec2-base-checkpoint-9 on the common_voice dataset, achieving a word error rate of 0.3292 on the evaluation set.
## Arabertmo Base V4
- Author: Ebtihal
- Task: Large Language Model
- Tags: Transformers, Arabic
- Description: AraBERTMo is an Arabic pre-trained language model based on the BERT architecture, supporting masked language modeling tasks.